172 research outputs found

    May 25th, 2017

    Get PDF
    After vivid discussions triggered by the emergence of the buzzword “Big Data”, it seems that industry and academia have reached a shared understanding of data properties (volume, velocity, variety, veracity and value), the resources and “know-how” it requires, and the opportunities it opens. Indeed, new applications promising fundamental changes in society, industry and science include face recognition, machine translation, digital assistants, self-driving cars, ad serving, chatbots, personalised healthcare, smart industry and more. The first lesson of the era of “Big Data” is that it is possible to access and exploit representative “samples” of available data collections, thanks to the availability of the resources needed to store them and run greedy processing tasks on them. The second lesson is that computer science and mathematics must build synergies with other sciences in order to exploit this newly available “value”. The consequence is the emergence of “new” data-centric sciences: data science, digital humanities, social data science, network science, computational science. These sciences, with their new requirements and challenges, call for revisiting the foundations of databases, artificial intelligence and the other disciplines used to address them, with new perspectives. This novel and multidisciplinary data-centric scientific movement promises new and not yet imagined applications that rely on massive amounts of evolving data that must be cleaned, integrated and analysed for modelling purposes. Yet data management issues are not usually perceived as central. In this lecture I will explore the key challenges and opportunities for data management in this new scientific world, and discuss how a possible data-centric artificial intelligence community can best contribute to these exciting domains. Even where the motivation is not academic, the huge sums being devoted to related applications are pushing industry and academia to explore these directions.

    Message from AICCSA'22 General Chairs

    Get PDF
    With great pleasure we welcome the delegates of the 19th IEEE Conference on Computer Systems and Applications (AICCSA'22), held on the campus of Zayed University in Abu Dhabi, United Arab Emirates (UAE) from December 5 to 7, 2022. The essence of AICCSA in computer systems research remains strong, healthy, and vibrant, with an acceptance rate of 42%. The AICCSA'22 IEEE proceedings will publish the accepted authors' works on Xplore following the conference. AICCSA uses a double-blind process whereby each submission undergoes a rigorous review. The breadth of topics in the submitted papers reflects the scope of areas in which computer systems evolve.

    Model2Roo: Web Application Development based on the Eclipse Modeling Framework and Spring Roo

    No full text
    ISBN: 978-87-643-1014-6. International audience. Inherent complexity in web application development is continually increasing, due to technical challenges, like new programming frameworks and tools, and also due to changes in both functional and non-functional requirements of web applications. In this context, model-driven techniques can be used to guide the development of web systems, by focusing on different levels of modelling abstraction that encapsulate both implementation details and the definition of system requirements. This paper presents Model2Roo, a tool intended for Java web application development that relies on the Eclipse Modeling Framework and on the Spring Roo project. In particular, this paper outlines key issues highlighted by previous users of the tool, and also demonstrates recently implemented features.

    Generation of an Architecture View for Web Applications using a Bayesian Network Classifier

    No full text
    Session: Software Engineering. International audience. A recurring problem in software engineering is the correct definition and enforcement of an architecture that can guide the development and maintenance of software systems. This is due in part to a lack of correct definition and maintenance of architectural documentation. In this paper, an approach based on a Bayesian network classifier is proposed to aid the generation of an architecture view for web applications developed according to the Model View Controller (MVC) architectural pattern. This view comprises the system components, their inter-project relations and their classification according to the MVC pattern. The generated view can then be used as part of the system documentation to help enforce the original architectural intent when changes are applied to the system. Finally, an implementation of this approach is presented for Java-based systems, using training data from popular web development frameworks.
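As a toy illustration of the classification idea in the abstract above, a categorical naive Bayes classifier (a simple special case of a Bayesian network classifier) can assign MVC roles to classes based on a few surface features. The feature names, training samples and labels below are invented for illustration; the paper itself trains on real web framework code.

```python
# Minimal sketch: classify web-app classes into MVC roles with a tiny
# hand-rolled categorical naive Bayes. Features (class-name suffix and a
# hypothetical "imports" category) and the training set are illustrative.
from collections import Counter, defaultdict

TRAIN = [
    (("suffix=Controller", "imports=web"), "Controller"),
    (("suffix=Controller", "imports=web"), "Controller"),
    (("suffix=Entity", "imports=persistence"), "Model"),
    (("suffix=Dao", "imports=persistence"), "Model"),
    (("suffix=Page", "imports=template"), "View"),
    (("suffix=Form", "imports=template"), "View"),
]

def train(samples):
    """Count class priors and per-class feature frequencies."""
    label_counts = Counter(label for _, label in samples)
    feat_counts = defaultdict(Counter)
    for feats, label in samples:
        feat_counts[label].update(feats)
    return label_counts, feat_counts

def classify(feats, label_counts, feat_counts):
    """Return the label maximising prior * product of feature likelihoods."""
    total = sum(label_counts.values())
    best, best_p = None, 0.0
    for label, n in label_counts.items():
        p = n / total
        for f in feats:
            # Laplace smoothing so unseen features do not zero out a class.
            p *= (feat_counts[label][f] + 1) / (n + 2)
        if p > best_p:
            best, best_p = label, p
    return best

model = train(TRAIN)
print(classify(("suffix=Dao", "imports=persistence"), *model))  # Model
```

A real pipeline would extract such features automatically from parsed source code; the classifier itself stays this simple.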

    Model-Driven Cloud Data Storage

    No full text
    ISBN: 978-87-643-1014-6 - http://www2.imm.dtu.dk/conferences/ECMFA-2012/proceedings/ - International audience. The increasing adoption of the cloud computing paradigm has motivated a redefinition of traditional software development methods. In particular, data storage management has received a great deal of attention, due to growing interest in the challenges and opportunities associated with the NoSQL movement. However, appropriate selection, administration and use of cloud storage implementations remain a highly technical endeavour, due to large differences in the way data is represented, stored and accessed by these systems. This position paper motivates the use of model-driven techniques to avoid dependencies between high-level data models and cloud storage implementations. In this way, developers depend only on high-level data models and rely on transformation procedures to deal with particular cloud storage details, such as different APIs and deployment providers; they are thus able to target multiple cloud storage environments without modifying their core data models.

    Big continuous data: dealing with velocity by composing event streams

    No full text
    International audience. The rate at which we produce data is growing steadily, thus creating ever larger streams of continuously evolving data. Online news, micro-blogs and search queries are just a few examples of these continuous streams of user activities. The value of these streams lies in their freshness and their relatedness to ongoing events. Modern applications consuming these streams need to extract behaviour patterns, which can be obtained by statically and dynamically aggregating and mining huge event histories. An event is the notification that a happening of interest has occurred. Event streams must be combined or aggregated to produce more meaningful information. By combining and aggregating them, either from multiple producers or from a single one during a given period of time, a limited set of events describing meaningful situations can be notified to consumers. Event streams, with their volume and continuous production, relate mainly to two of the characteristics attributed to Big Data by the 5V's model: volume and velocity. Techniques such as complex pattern detection, event correlation, event aggregation, event mining and stream processing have been used for composing events. Nevertheless, to the best of our knowledge, few approaches integrate different composition techniques (online and post-mortem) for dealing with Big Data velocity. This chapter gives an analytical overview of event stream processing and composition approaches: complex event languages, services and event querying systems on distributed logs. Our analysis underlines the challenges introduced by Big Data velocity and volume and uses them as a reference for identifying the scope and limitations of results stemming from different disciplines: networks, distributed systems, stream databases, event composition services, and data mining on traces.
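As a minimal sketch of the aggregation idea described in the abstract above, the following groups a stream of timestamped events into tumbling windows and counts event types per window. The event shape and window size are assumptions for illustration, not the chapter's specific model.

```python
# Tumbling-window aggregation over an event stream: count event types
# per fixed-size time window. Events are (timestamp, type) pairs.
from collections import Counter

def tumbling_counts(events, window=10):
    """events: iterable of (timestamp, type), ordered by timestamp.
    Yields (window_start, Counter of event types) per non-empty window."""
    current_start, counts = None, Counter()
    for ts, etype in events:
        start = (ts // window) * window  # window this event falls into
        if current_start is None:
            current_start = start
        if start != current_start:
            # Window boundary crossed: emit the finished window.
            yield current_start, counts
            current_start, counts = start, Counter()
        counts[etype] += 1
    if counts:
        yield current_start, counts  # flush the last window

stream = [(1, "click"), (3, "click"), (7, "view"), (12, "click"), (15, "view")]
for win, c in tumbling_counts(stream, window=10):
    print(win, dict(c))
# 0 {'click': 2, 'view': 1}
# 10 {'click': 1, 'view': 1}
```

Production stream processors add out-of-order handling, sliding windows and distribution, but the core reduction per window is the same.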

    From Text to Knowledge with Graphs: modelling, querying and exploiting textual content

    Full text link
    This paper highlights the challenges, current trends, and open issues related to the representation, querying and analytics of content extracted from texts. The internet contains vast text-based information on various subjects, including commercial documents, medical records, scientific experiments, engineering tests, and events that impact urban and natural environments. Extracting knowledge from this text involves understanding the nuances of natural language and accurately representing the content without losing information. This allows knowledge to be accessed, inferred, or discovered. Achieving this requires combining results from various fields, such as linguistics, natural language processing, knowledge representation, data storage, querying, and analytics. The vision in this paper is that graphs, once annotated and paired with the right querying and analytics techniques, can be a well-suited representation of text content. This paper discusses this hypothesis from the perspectives of linguistics, natural language processing, graph models and databases, and artificial intelligence, as provided by the panellists of the DOING session at the MADICS Symposium 2022.

    A Holistic Approach for Measuring Quality of Life in “La Condesa” Neighbourhood in Mexico City

    Get PDF
    This paper presents our approach for computing an index of quality of life (QoL) through a data science methodology considering quantitative and qualitative measures. Our methodology seeks to help maximise a holistic return on investment that we propose and name elasticity of quality of life (E-QoL). E-QoL models and calibrates variables associated with social, economic, cultural, psychological and health perspectives to find an “optimum” composite benefit including economic and “wellbeing” aspects. Our notion of return on investment expressed by the E-QoL is holistic because it considers different objective and subjective perspectives to provide a multi-faceted understanding of QoL and the way it can impact citizens' perception of their wellbeing (e.g., happiness, self-fulfilment, satisfaction). Wellbeing is an immaterial perception that is difficult to measure. Yet, supported by studies on the impact of quality of life on wellbeing perception, our claim is that by studying measurable variables and combining them in a mathematical model, it is possible to analyse and understand the subjective perception of wellbeing. Studying this subjective perception under different objective and subjective views provides a holistic study of QoL. Thus, the model would help us experiment with and explore different perspectives on citizens' wellbeing perception and how changes in the urban environment impact it. This work has chosen the historical neighbourhood La Condesa, located 4 km from the Historical Downtown Area of Mexico City, as a case study, where governmental programmes have been designed to promote the construction of social housing and, in consequence, bring inhabitants, leisure activities and business life to central territories.
It seems that economic growth and population increase in the neighbourhood have not been compatible with citizens' QoL and wellbeing perception, and have not led to an increase in the quality of life index observed in the neighbourhood.
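As a hedged sketch of how a composite index like the one described above can combine measurable variables, the following computes a weighted sum of indicators normalised to [0, 1]. The dimensions, value ranges and weights are illustrative placeholders, not the calibrated E-QoL model of the paper.

```python
# Composite QoL index as a weighted sum of normalised indicators.
# All dimension names, ranges and weights below are hypothetical.

def normalise(value, lo, hi):
    """Rescale a raw indicator into [0, 1] over an assumed range."""
    return (value - lo) / (hi - lo)

def qol_index(indicators, weights):
    """indicators and weights: dicts keyed by dimension; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * indicators[k] for k in indicators)

scores = {
    "economic": normalise(32000, 0, 40000),  # e.g. income over an assumed range
    "health": 0.6,
    "cultural": 0.7,
    "social": 0.5,
}
weights = {"economic": 0.3, "health": 0.3, "cultural": 0.2, "social": 0.2}
print(round(qol_index(scores, weights), 2))  # 0.66
```

Calibrating such weights against survey data on perceived wellbeing is where the modelling effort described in the paper would actually lie.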

    Hybrid query plan generation

    No full text
    http://ceur-ws.org/Vol-911 - Regular Paper - International audience. A hybrid query combines a requirement for data produced by data services with a set of QoS preferences with respect to query execution. In this paper we present the problem of hybrid query optimisation and, in particular, the generation of a search space of hybrid query plans. We show how the constraints for generating hybrid query plans are modelled, and validate these constraints by implementing them in an action language. We present graphs of experimental results that show the complexity of this generation.

    NoXperanto: Crowdsourced Polyglot Persistence

    No full text
    This paper proposes NoXperanto, a novel crowdsourcing approach to querying over data collections managed in polyglot persistence settings. The main contribution of NoXperanto is the ability to solve complex queries involving different data stores by exploiting queries from expert users (i.e. a crowd of database administrators, data engineers, domain experts, etc.), assuming that these users can submit meaningful queries. NoXperanto exploits the results of meaningful queries in order to facilitate forthcoming query answering processes. In particular, query results are used to: (i) help non-expert users in using the multi-database environment and (ii) improve the performance of the multi-database environment, which not only uses disk and memory resources but also relies heavily on network bandwidth. NoXperanto employs a layer to keep track of the information produced by the crowd, modelled as a Property Graph and managed in a Graph Database Management System (GDBMS).